Convergence of Least Squares Learning Mechanisms in Self-Referential Linear Stochastic Models*

Author

  • ALBERT MARCET
Abstract

We study a class of models in which the law of motion perceived by agents influences the law of motion that they actually face. We assume that agents update their perceived law of motion by least squares. We show how the perceived law of motion and the actual one may converge to one another, depending on the behavior of a particular ordinary differential equation. The differential equation involves the operator that maps the perceived law of motion into the actual one. Journal of Economic Literature Classification Numbers: 021, 023, 211.
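The interplay between a perceived and an actual law of motion can be illustrated with a minimal scalar sketch (a hypothetical example, not the paper's general setup): agents believe y_t = β·x_t, while the economy actually generates y_t = T(β)·x_t + noise with T(β) = a + b·β, and beliefs are revised by recursive least squares. The associated differential equation dβ/dτ = T(β) − β is stable when b < 1, so β should approach the fixed point a/(1 − b).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar self-referential model (illustrative only):
# perceived law:  y_t = beta * x_t
# actual law:     y_t = T(beta) * x_t + noise,  T(beta) = a + b * beta
a, b = 2.0, 0.5           # fixed point of T: beta* = a / (1 - b) = 4
beta, R = 0.0, 1.0        # initial belief and second-moment estimate

for t in range(1, 20001):
    x = rng.normal()
    y = (a + b * beta) * x + 0.1 * rng.normal()  # actual law, given beliefs
    # recursive least squares update of the perceived coefficient
    R += (x * x - R) / t
    beta += x * (y - beta * x) / (R * t)

# Since b < 1, the ODE d(beta)/d(tau) = T(beta) - beta is stable,
# and beta converges toward beta* = 4.
print(beta)
```

Under these assumed parameters the belief settles near 4; making b > 1 destabilizes the differential equation and the recursion diverges instead.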


Similar resources

Convergence of Least Squares Learning in Self-Referential Discontinuous Stochastic Models

We examine the stability of rational expectations equilibria in the class of models in which the decision of the individual agent is discontinuous with respect to the state variables. Instead of rational expectations, each agent learns the unknown parameters through a recursive stochastic algorithm. If the agents update the estimated value function "rapidly" enough, then each agent learns the true v...


Asymptotic Properties of Nonlinear Least Squares Estimates in Stochastic Regression Models Over a Finite Design Space. Application to Self-Tuning Optimisation

We present new conditions for the strong consistency and asymptotic normality of the least squares estimator in nonlinear stochastic models when the design variables vary in a finite set. The application to self-tuning optimisation is considered, with a simple adaptive strategy that guarantees simultaneously the convergence to the optimum and the strong consistency of the estimates of the model...


Stochastic Adaptive Switching Control Based on Multiple Models

It is well known that the transient behavior of traditional adaptive control may be very poor in general, and that adaptive control designed by switching between multiple models is an intuitively appealing and practically feasible approach to improving transient performance. This paper proves that for a typical class of linear systems disturbed by white noise, the multiple mo...


Linear Convergence of Proximal-Gradient Methods under the Polyak-Łojasiewicz Condition

In 1963, Polyak proposed a simple condition that is sufficient to show that gradient descent has a global linear convergence rate. This condition is a special case of the Łojasiewicz inequality proposed in the same year, and it does not require strong-convexity (or even convexity). In this work, we show that this much-older Polyak-Łojasiewicz (PL) inequality is actually weaker than the four mai...


Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition

In 1963, Polyak proposed a simple condition that is sufficient to show a global linear convergence rate for gradient descent. This condition is a special case of the Łojasiewicz inequality proposed in the same year, and it does not require strong convexity (or even convexity). In this work, we show that this much-older Polyak-Łojasiewicz (PL) inequality is actually weaker than the main condition...



Publication date: 2003